
Proposing a Framework for Machine Learning Adoption on Legacy Systems

Rahman, Ashiqur, Alhoori, Hamed

arXiv.org Artificial Intelligence

The integration of machine learning (ML) is critical for industrial competitiveness, yet its adoption is frequently stalled by the prohibitive costs and operational disruptions of upgrading legacy systems. The financial and logistical overhead required to support the full ML lifecycle presents a formidable barrier to widespread implementation, particularly for small and medium-sized enterprises. This paper introduces a pragmatic, API-based framework designed to overcome these challenges by strategically decoupling the ML model lifecycle from the production environment. Our solution delivers the analytical power of ML to domain experts through a lightweight, browser-based interface, eliminating the need for local hardware upgrades and ensuring model maintenance can occur with zero production downtime. This human-in-the-loop approach empowers experts with interactive control over model parameters, fostering trust and facilitating seamless integration into existing workflows. By mitigating the primary financial and operational risks, this framework offers a scalable and accessible pathway to enhance production quality and safety, thereby strengthening the competitive advantage of the manufacturing sector.


BlackBoxToBlueprint: Extracting Interpretable Logic from Legacy Systems using Reinforcement Learning and Counterfactual Analysis

Rathore, Vidhi

arXiv.org Artificial Intelligence

Modernizing legacy software systems is a critical but challenging task, often hampered by a lack of documentation and understanding of the original system's intricate decision logic. Traditional approaches like behavioral cloning merely replicate input-output behavior without capturing the underlying intent. This paper proposes a novel pipeline to automatically extract interpretable decision logic from legacy systems treated as black boxes. The approach uses a Reinforcement Learning (RL) agent to explore the input space and identify critical decision boundaries by rewarding actions that cause meaningful changes in the system's output. These counterfactual state transitions, where the output changes, are collected and clustered using K-Means. Decision trees are then trained on these clusters to extract human-readable rules that approximate the system's decision logic near the identified boundaries. I demonstrated the pipeline's effectiveness on three dummy legacy systems with varying complexity, including threshold-based, combined-conditional, and non-linear range logic. Results show that the RL agent successfully focuses exploration on relevant boundary regions, and the extracted rules accurately reflect the core logic of the underlying dummy systems, providing a promising foundation for generating specifications and test cases during legacy migration.
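The pipeline above (explore for output-changing inputs, cluster the counterfactual transitions, fit per-cluster decision trees) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a random-perturbation explorer stands in for the RL agent, and the dummy threshold-based legacy system and all names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Dummy threshold-based legacy system, treated as a black box.
def legacy_system(x):
    return int(x[0] > 0.5 and x[1] > 0.3)

# Explore the input space; a transition is "interesting" when the output
# changes (a random-perturbation walk stands in for the RL agent here).
boundary_points, labels = [], []
x = rng.random(2)
for _ in range(5000):
    x_new = np.clip(x + rng.normal(0, 0.05, size=2), 0, 1)
    if legacy_system(x_new) != legacy_system(x):  # counterfactual transition
        boundary_points.append(x_new)
        labels.append(legacy_system(x_new))
    x = x_new

X = np.array(boundary_points)
y = np.array(labels)

# Cluster the boundary transitions, then extract a human-readable rule
# near each cluster with a shallow decision tree.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for c in range(2):
    mask = clusters == c
    tree = DecisionTreeClassifier(max_depth=2).fit(X[mask], y[mask])
    print(f"cluster {c}:")
    print(export_text(tree, feature_names=["x0", "x1"]))
```

Because exploration concentrates samples near the decision boundary, the shallow trees split close to the true thresholds (0.5 and 0.3 in this toy system).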


MONO2REST: Identifying and Exposing Microservices: a Reusable RESTification Approach

Lecrivain, Matthéo, Barry, Hanifa, Tamzalit, Dalila, Sahraoui, Houari

arXiv.org Artificial Intelligence

The microservices architectural style has become the de facto standard for large-scale cloud applications, offering numerous benefits in scalability, maintainability, and deployment flexibility. Many organizations are pursuing the migration of legacy monolithic systems to a microservices architecture. However, this process is challenging, risky, time-intensive, and prone to failure, and many organizations lack the financial resources, time, or expertise to set up such a migration. So, rather than migrating a legacy system where migration is risky or not feasible, we suggest exposing it as a microservice application without having to migrate it. In this paper, we present a reusable, automated, two-phase approach that combines evolutionary algorithms with machine learning techniques. In the first phase, we identify microservices at the method level using a multi-objective genetic algorithm that considers both structural and semantic dependencies between methods. In the second phase, we generate REST APIs for each identified microservice using a classification algorithm to assign HTTP methods and endpoints. We evaluated our approach with a case study on the Spring PetClinic application, which has both monolithic and microservices implementations that serve as ground truth for comparison. Results demonstrate that our approach successfully aligns identified microservices with those in the reference microservices implementation, highlighting its effectiveness in service identification and API generation.
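The second phase (classifying legacy methods into HTTP verbs for REST exposure) can be sketched with a toy text classifier over method-name tokens. The training pairs, feature choice, and classifier here are illustrative assumptions, not the paper's actual model or data:

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: method names labeled with the HTTP verb they map to
# (loosely inspired by Spring PetClinic naming, chosen for illustration).
methods = ["findOwnerById", "getVisits", "listPets", "createOwner",
           "addVisit", "updatePet", "editOwner", "deleteVisit", "removePet"]
verbs   = ["GET", "GET", "GET", "POST",
           "POST", "PUT", "PUT", "DELETE", "DELETE"]

def tokens(name):
    # Split camelCase into lowercase tokens: "findOwnerById" -> "find owner by id"
    return " ".join(t.lower() for t in re.findall(r"[A-Z]?[a-z]+", name))

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit([tokens(m) for m in methods], verbs)

for m in ["getPetById", "deleteOwner", "createVisit"]:
    print(m, "->", clf.predict([tokens(m)])[0])
```

A real system would also have to derive endpoint paths and parameter bindings, but the verb-assignment step reduces to exactly this kind of supervised classification.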


Breaking the Cycle of Recurring Failures: Applying Generative AI to Root Cause Analysis in Legacy Banking Systems

Jin, Siyuan, Bei, Zhendong, Chen, Bichao, Xia, Yong

arXiv.org Artificial Intelligence

Traditional banks face significant challenges in digital transformation, primarily due to legacy system constraints and fragmented ownership. Recent incidents show that such fragmentation often results in superficial incident resolutions, leaving root causes unaddressed and causing recurring failures. We introduce a novel approach to post-incident analysis, integrating knowledge-based GenAI agents with the "Five Whys" technique to examine problem descriptions and change request data. This method uncovered that approximately 70% of the incidents previously attributed to management or vendor failures were due to underlying internal code issues. We present a case study to show the impact of our method. By scanning over 5,000 projects, we identified over 400 files with a similar root cause. Overall, we leverage the knowledge-based agents to automate and elevate root cause analysis, transforming it into a more proactive process. These agents can be applied across other phases of the software development lifecycle, further improving development processes.
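The "Five Whys" loop driven by an agent can be sketched as a simple iterative query, where `ask_agent` is a hypothetical callable wrapping a knowledge-based GenAI agent (e.g. an LLM grounded in problem descriptions and change-request data); everything here is an assumption, not the paper's system:

```python
def five_whys(problem, ask_agent, depth=5):
    """Iteratively ask the agent 'why' until a root cause is reached.

    `ask_agent` is a hypothetical callable: it takes a question string and
    returns a cause string, or None when it cannot dig deeper.
    """
    chain = [problem]
    for _ in range(depth):
        cause = ask_agent(f"Why did this occur: {chain[-1]}")
        if cause is None or cause == chain[-1]:
            break
        chain.append(cause)
    return chain

# Usage with a stubbed agent (a real one would call an LLM with retrieval):
causes = {
    "payment failed": "service timeout",
    "service timeout": "unbounded retry loop in legacy module",
}
agent = lambda q: causes.get(q.split(": ", 1)[1])
print(five_whys("payment failed", agent))
```

The final element of the returned chain is the candidate root cause, which can then be matched against other files or projects, as in the 5,000-project scan described above.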


Enhancing Supervised Learning with Contrastive Markings in Neural Machine Translation Training

Berger, Nathaniel, Exel, Miriam, Huck, Matthias, Riezler, Stefan

arXiv.org Artificial Intelligence

Supervised learning in Neural Machine Translation (NMT) typically follows a teacher forcing paradigm where the model is conditioned on reference tokens rather than on its own previous predictions. In order to alleviate this lack of exploration in the space of translations, we present a simple extension of standard maximum likelihood estimation by a contrastive marking objective. The additional training signals are extracted automatically from reference translations by comparing the system hypothesis against the reference, and are used for up/down-weighting correct/incorrect tokens. The proposed training procedure requires one additional translation pass over the training set per epoch, and does not alter the standard inference setup. We show that training with contrastive markings yields improvements on top of supervised learning, and is especially useful when learning from post-edits, where contrastive markings indicate human error corrections to the original hypotheses. Code is publicly released.
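The marking idea (rescaling each token's likelihood term by whether the hypothesis token matches the reference) can be sketched in a few lines. This is a simplified illustration with assumed weights and a crude position-wise comparison; the paper's actual marking may use a proper alignment rather than positions:

```python
def contrastive_marking_weights(hyp_tokens, ref_tokens, up=1.5, down=0.5):
    # Mark each hypothesis token as correct (up-weight) or incorrect
    # (down-weight) by position-wise comparison against the reference.
    weights = []
    for i, tok in enumerate(hyp_tokens):
        ok = i < len(ref_tokens) and ref_tokens[i] == tok
        weights.append(up if ok else down)
    return weights

def weighted_nll(token_logprobs, weights):
    # Standard MLE loss is the sum of -log p(token); the markings
    # rescale each per-token term before summing.
    return -sum(w * lp for w, lp in zip(weights, token_logprobs))

# Example: the third hypothesis token disagrees with the reference,
# so its loss contribution is down-weighted.
w = contrastive_marking_weights(["the", "cat", "sat"], ["the", "cat", "sits"])
print(w, weighted_nll([-0.1, -0.2, -2.0], w))
```

For post-edits, the reference is the human-corrected hypothesis, so the down-weighted positions are exactly the tokens the human changed.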


Artificial Intelligence: A Reality Check - AnalyticsWeek

#artificialintelligence

Artificial Intelligence (AI) is the new black, the shiny new object, the answer to every marketer's prayers, and the end of creativity. The recent emergence of AI from the arcane halls of academia and the backrooms of data science has been prompted by stories of drones, robots and driverless cars undertaken by tech giants like Amazon. But the hype exceeds the day-to-day reality. AI has a fifty-year history of mathematical and computer science development, experimentation and thought. What makes it exciting is the confluence of large data sets, improved platforms and software, faster and more robust processing capabilities and a growing cadre of data scientists eager to exploit a wider range of applications.


Council Post: How To Create A Data Platform For Sustainable Data-Driven Transformation

#artificialintelligence

Leon Gordon is a leader in data analytics, a current Microsoft MVP based in the U.K. and a partner at Pomerol Partners. The digital revolution has set off near-panic among executives as they seek to find ways to use data and analytics to enhance their decision-making ability. The end result of data transformation is the creation of a digital core, a set of capabilities that are used to power the entire organization. Companies seeking to transform themselves with AI must establish a strategic plan for using analytics and insights derived from data. Organizational structure is important when considering an AI strategy because it can help ensure data are properly transferred and secure as they move through their lifecycle.


Bank of England reports on AI in financial services - LoupedIn

#artificialintelligence

The Bank of England has published its report "Machine Learning in UK Financial Services". The report sets out its findings, following a survey of around a hundred regulated firms in the UK. It highlights the growing use of machine learning, especially in insurance, and the challenges of explainability, legacy systems, the skills gap and regulatory uncertainty. The number of UK financial services firms using or developing machine learning (ML) applications is increasing, and this trend is set to continue across a greater range of business areas within financial services. The largest expected increase in use, in absolute terms, is in the insurance sector, followed by banking.


How to Overcome Challenges in Digital Transformation

#artificialintelligence

It's the holistic approach to developing better processes, improving employee engagement, and optimizing your business for long-term growth by adopting the proper technology. Because it is a complex endeavor, you should expect some resistance along the way and be prepared to overcome challenges. Let's start with the basics. A digital transformation uses technology to help improve business operations and customer relationships. It can be as simple as ensuring your website works properly on all devices or as complex as using artificial intelligence to automate specific tasks.


The Perks and Obstacles of AI Adoption in Insurance

#artificialintelligence

Imagine that you are a leader at an insurance company. You know that artificial intelligence (AI) will give you a competitive edge and have decided to invest. You hired two brilliant data scientists, Juana and Yash. Juana develops an AI solution that scans digitized customer files, mines them for relevant information, and calculates accurate pay-outs. You project savings of over $1 million in the next 2 years, and 30% increased staff productivity.